DNN Classifier (TSUMURA)
Structured Review

DNN Classifier, supplied by TSUMURA, is used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation.
https://www.bioz.com/result/dnn classifier/product/TSUMURA
Average 90 stars, based on 1 article review
Images
1) Product Images from "Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network"
Article Title: Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network
Journal: Frontiers in Neuroinformatics
doi: 10.3389/fninf.2021.802938
Figure Legend Snippet: Applications of counterfactual explanation in fMRI. The example illustrates an application of counterfactual explanation to a misclassification by a DNN classifier. (A) In this example, a DNN classifier incorrectly assigned EMOTION to a map of brain activation obtained in a MOTOR task. Because of the black-box nature of the DNN classifier, it is difficult to explain why the misclassification occurred. (B) A generative neural network for counterfactual brain activation (CAG) minimally transforms the real brain activation in (A) so that the DNN classifier now assigns MOTOR to the morphed activation (counterfactual activation). (C) Counterfactual explanation of misclassification in (A) can be obtained by taking the difference between the real activation and the counterfactual activation. In this example, the real brain activation would have been classified (correctly) as MOTOR if red (blue) brain regions in the counterfactual explanation had been more (less) active.
Techniques Used: Activation Assay
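The counterfactual explanation in (C) is a single arithmetic step: subtract the real activation map from the counterfactual one. Below is a minimal NumPy sketch of that step; the array shapes and the CAG stand-in are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def counterfactual_explanation(real_map, counterfactual_map):
    """Pixel-by-pixel difference: counterfactual minus real.

    Positive (negative) values mark regions that would have needed to be
    more (less) active for the classifier to assign the target label.
    """
    return counterfactual_map - real_map

# Toy stand-ins for the two flattened cortical sheets (shapes are guesses).
rng = np.random.default_rng(0)
real = rng.normal(size=(2, 128, 128))                      # real MOTOR map
counterfactual = real + 0.1 * rng.normal(size=real.shape)  # CAG output stub

explanation = counterfactual_explanation(real, counterfactual)
print(explanation.shape)
```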
Figure Legend Snippet: DNN classifier for brain activity decoding. (A) Following the standard procedure developed by HCP (Glasser et al., ), the neocortex in the two hemispheres was mapped to two cortical sheets. Each neocortical activity image was mapped onto the two sheets, which were then input to the DNN classifier (for details, see Tsumura et al., ). (B) Model architecture of the DNN classifier. The input was a picture containing two sheets of cortical activations. The picture was downsampled for later processing by the generative neural network. The DNN classifier was a deep convolutional network similar to the one described in our previous study (Tsumura et al., ). The output of the DNN classifier was a one-hot vector representing one of seven behavioral tasks in the HCP dataset. (C) Training history of the transfer learning. Test accuracy (blue) and validation accuracy (magenta) are shown for five replicates. Note that the chance level is 14.3% (1/7). (D) Profile of the classifier's decisions (confusion matrix) in the validation set.
Techniques Used: Activity Assay, Biomarker Discovery
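To make panel (B) concrete, here is a minimal PyTorch sketch of a 7-way convolutional classifier that takes the two cortical sheets as two input channels. The layer widths, kernel sizes, and input resolution are placeholders, not the architecture actually described in Tsumura et al.

```python
import torch
import torch.nn as nn

class TaskClassifier(nn.Module):
    """Sketch of a 7-way convolutional task classifier (layer sizes assumed)."""

    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),  # two sheets in
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)  # logits over the seven HCP tasks

model = TaskClassifier()
logits = model(torch.randn(4, 2, 128, 128))  # batch of downsampled pictures
print(logits.shape)  # torch.Size([4, 7]); chance accuracy is 1/7 ≈ 14.3%
```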
Figure Legend Snippet: Decision profile of DNN classifier on counterfactual activations.
Techniques Used:
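Producing such a decision profile amounts to running the classifier on every counterfactual map and tallying its decisions against the class each map was transformed toward. A sketch using scikit-learn's confusion_matrix; the seven task names are those of the HCP dataset, and the decisions here are simulated stand-ins.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

TASKS = ["EMOTION", "GAMBLING", "LANGUAGE", "MOTOR",
         "RELATIONAL", "SOCIAL", "WM"]  # the seven HCP behavioral tasks

rng = np.random.default_rng(1)
target = rng.integers(0, 7, size=100)           # class each map was transformed toward
pred = np.where(rng.random(100) < 0.9, target,  # simulated classifier decisions
                rng.integers(0, 7, size=100))

cm = confusion_matrix(target, pred, labels=np.arange(7))
print(cm)  # rows: target class, columns: classifier decision
```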
Figure Legend Snippet: Counterfactual explanation of correct classification. (A) Schematics of the question asked in this analysis. In this example, the DNN classifier correctly assigned a “MOTOR” label to a real brain activation in the MOTOR task. Here, we want to interrogate this correct decision. Specifically, we ask the question “why did the classifier assign MOTOR instead of EMOTION?” (B–E) Examples of counterfactual explanation. (B) A population average map of real brain activation in the MOTOR task. (C) A population average map of counterfactual activation obtained by transforming the map in (B) to EMOTION. Transformation was conducted for each activation map and then averaged across the population. (D) Pixel-by-pixel subtraction of the maps in (C) and (B) that serves as the counterfactual explanation. This map explains why the map was classified as MOTOR but not EMOTION. (E) Simple difference between the averages of real activations in the EMOTION and MOTOR tasks.
Techniques Used: Activation Assay, Transformation Assay
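Panels (D) and (E) contrast two subtractions: the counterfactual explanation, i.e. the population average of per-map (counterfactual minus real) differences, and the naive baseline, i.e. the difference of the two class-average maps. A small NumPy sketch with placeholder data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder stacks of activation images: (n_subjects, height, width).
real_motor = rng.normal(size=(20, 128, 128))
cf_emotion = real_motor + 0.1 * rng.normal(size=real_motor.shape)  # CAG stub
real_emotion = rng.normal(size=(20, 128, 128))

# (D) counterfactual explanation: per-map difference, averaged over subjects.
explanation = (cf_emotion - real_motor).mean(axis=0)

# (E) naive baseline: simple difference of class-average maps.
baseline = real_emotion.mean(axis=0) - real_motor.mean(axis=0)

print(explanation.shape, baseline.shape)
```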
Figure Legend Snippet: Decision of DNN classifier on counterfactual activations obtained from correctly classified brain activations.
Techniques Used: Control
Figure Legend Snippet: Counterfactual explanation of incorrect classification. (A) Schematics of the question asked in this analysis. In this example, the DNN classifier incorrectly assigned a “SOCIAL” label to a real brain activation in the EMOTION task. Here, we want to interrogate this incorrect decision. Specifically, we ask the question “why did the classifier (incorrectly) assign SOCIAL instead of EMOTION?” (B–E) Example of counterfactual explanation. (B) A single brain activation map for EMOTION that was incorrectly classified as SOCIAL by the DNN classifier. (C) A map of counterfactual activation obtained by transforming the map in (B) to EMOTION. (D) Pixel-by-pixel subtraction of the maps in (C) and (B) that serves as the counterfactual explanation. This map explains why the map was incorrectly classified as SOCIAL but not EMOTION. (E) Simple difference between the average of real activations in the EMOTION task and the single activation map (classified as SOCIAL) shown in (B).
Techniques Used: Activation Assay
Figure Legend Snippet: Decision of the DNN classifier on counterfactual activations obtained from misclassified brain activations.
Techniques Used: Control
Figure Legend Snippet: Counterfactual exaggeration of brain activation. (A) Schematic of counterfactual exaggeration. A brain activation (MOTOR task in this example) was iteratively transformed toward MOTOR by CAG. This iterative transformation accentuates (exaggerates) image features that bias the classifier's decision toward MOTOR. (B–D) Example of counterfactual exaggeration. A brain activation in the EMOTION task (B) was iteratively transformed toward EMOTION eight times. Images after the third (C) and eighth (D) transformations are shown. (E) A subtle image feature enhanced by counterfactual exaggeration was isolated by taking the difference between exaggerated images. In this example, the difference between the exaggerated images in (C) and (D) was calculated. The resulting difference image showed a texture-like pattern. (F) Example of a texture-like feature extracted by counterfactual exaggeration (top). The bottom panel shows the texture-like pattern added to randomly chosen raw brain activations (middle). See also for another example. (G) Decisions of the DNN classifier on brain activations with texture-like patterns added. Each dot represents one example texture (N = 12; see Methods for details). The bar graph shows the mean and standard deviation. The classifier was significantly biased toward the class of the texture-like patterns (*p < 0.001, Wilcoxon signed-rank test). Chance level was one in seven (14.3%).
Techniques Used: Activation Assay, Transformation Assay, Isolation, Standard Deviation
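The exaggeration in (A–D) is an iterated application of the generator toward a fixed target class, and the bias test in (G) compares per-texture decision rates against the 1/7 chance level. A sketch with a toy generator and simulated rates; scipy.stats.wilcoxon implements the Wilcoxon signed-rank test reported in the legend, while the generator and rates below are placeholders.

```python
import numpy as np
from scipy.stats import wilcoxon

def exaggerate(activation, generator, target, n_iter=8):
    """Iteratively re-apply a CAG-style generator toward `target`."""
    maps = [activation]
    for _ in range(n_iter):
        maps.append(generator(maps[-1], target))
    return maps

rng = np.random.default_rng(3)
texture = 0.05 * rng.normal(size=(128, 128))  # fixed nudge standing in for CAG
generator = lambda m, target: m + texture

maps = exaggerate(rng.normal(size=(128, 128)), generator, target="EMOTION")
isolated = maps[8] - maps[3]  # difference of the eighth and third transformations

# (G) bias test: per-texture rate of target-class decisions vs. 1/7 chance.
rates = rng.uniform(0.3, 0.9, size=12)  # simulated rates, one per texture (N = 12)
stat, p = wilcoxon(rates - 1.0 / 7.0)   # Wilcoxon signed-rank test against chance
print(f"W = {stat:.1f}, p = {p:.4f}")
```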
